Catastrophic Forgetting in Connectionist Networks: Causes, Consequences and Solutions

Author

  • Robert M. French
Abstract

All natural cognitive systems, and, in particular, our own, gradually forget previously learned information. Consequently, plausible models of human cognition should exhibit similar patterns of gradual forgetting of old information as new information is acquired. Only rarely does new learning in natural cognitive systems completely disrupt or erase previously learned information. In other words, natural cognitive systems do not, in general, forget catastrophically. Unfortunately, this is precisely what occurs under certain circumstances in distributed connectionist networks. It turns out that the very features that give these networks their much-touted abilities to generalize and to function in the presence of degraded input are also the root cause of catastrophic forgetting. The challenge is to keep the advantages of distributed connectionist networks while avoiding catastrophic forgetting. In this article we examine the causes of, consequences of, and numerous solutions to the problem of catastrophic forgetting in neural networks. We consider how the brain might have overcome this problem and explore the consequences of this solution.

Introduction

By the end of the 1980s, many of the early problems with connectionist networks, such as their difficulties with sequence learning and the profoundly stimulus-response nature of supervised learning algorithms such as error backpropagation, had been largely solved. However, as these problems were being solved, another was discovered by McCloskey and Cohen, and by Ratcliff. They suggested that there might be a fundamental limitation to this type of distributed architecture, in the same way that Minsky and Papert had shown twenty years earlier that there were certain fundamental limitations to what a perceptron could do. They observed that, under certain conditions, the process of learning a new set of patterns suddenly and completely erased a network's knowledge of what it had already learned. They referred to this phenomenon as catastrophic interference (or catastrophic forgetting) and suggested that the underlying cause was the very thing that gave the networks their remarkable abilities to generalize and degrade gracefully: a single set of shared weights. Catastrophic interference is a radical manifestation of a more general problem for connectionist models of memory (in fact, for any model of memory), the so-called "stability-plasticity" problem: how to design a system that is simultaneously sensitive to, but not radically disrupted by, new input. In this article we focus primarily on a particular, widely used class of distributed neural network architectures, namely those with a single set of shared (or partially shared) multiplicative weights. While this defines a very broad class of networks, it is certainly not exhaustive. In the remainder of the article we discuss the numerous attempts over the last decade to solve this problem within the context of this type of network.
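The sequential-learning failure described above is easy to reproduce. Below is a minimal sketch (not from the article; the network sizes, learning rate, and toy tasks are all illustrative assumptions) of a backpropagation network with a single set of shared weights that first learns one set of input-output associations, is then trained on a second set alone, and loses the first:

```python
# Minimal demonstration of catastrophic forgetting in a backprop network.
# Everything here (sizes, learning rate, toy tasks) is an illustrative
# assumption, not the article's own experiment.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class MLP:
    """One-hidden-layer network trained with squared-error backprop."""

    def __init__(self, n_in=8, n_hid=16, n_out=8, lr=0.1):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0, 0.5, (n_hid, n_out))
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1)
        return sigmoid(self.h @ self.W2)

    def train(self, X, T, epochs=3000):
        for _ in range(epochs):
            y = self.forward(X)
            d_out = (y - T) * y * (1 - y)          # output-layer delta
            d_hid = (d_out @ self.W2.T) * self.h * (1 - self.h)
            self.W2 -= self.lr * self.h.T @ d_out  # same shared weights
            self.W1 -= self.lr * X.T @ d_hid       # store every association

def error(net, X, T):
    return float(np.mean((net.forward(X) - T) ** 2))

# Two unrelated toy "tasks": random inputs mapped to random binary targets.
XA, TA = rng.uniform(0, 1, (10, 8)), rng.integers(0, 2, (10, 8)).astype(float)
XB, TB = rng.uniform(0, 1, (10, 8)), rng.integers(0, 2, (10, 8)).astype(float)

net = MLP()
net.train(XA, TA)
print(f"task A error after learning A: {error(net, XA, TA):.4f}")  # low
net.train(XB, TB)   # sequential training on B only, no rehearsal of A
print(f"task A error after learning B: {error(net, XA, TA):.4f}")  # much higher
```

Because both tasks are stored in the same two weight matrices, the updates that encode task B freely overwrite the configuration that encoded task A; nothing in the algorithm protects old knowledge.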


Similar Articles

Catastrophic forgetting in connectionist networks.

All natural cognitive systems, and, in particular, our own, gradually forget previously learned information. Plausible models of human cognition should therefore exhibit similar patterns of gradual forgetting of old information as new information is acquired. Only rarely does new learning in natural cognitive systems completely disrupt or erase previously learned information; that is, natural cognitive systems do not, in general, forget catastrophically.


Catastrophic interference in connectionist networks

Contents:

  • Introduction
  • Catastrophic forgetting vs. normal forgetting
  • Measures of catastrophic interference
  • Solutions to the problem
  • Rehearsal and pseudorehearsal (see the sketch below)
  • Other techniques for alleviating catastrophic forgetting in neural networks
  • Summary
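As a rough illustration of the pseudorehearsal idea named in the outline (in the spirit of Robins' technique; the toy setup below is an assumption, not this paper's code): rather than storing old items, the trained network is probed with random inputs and its own responses are kept as pseudo-items that are interleaved with the new task.

```python
# Sketch of pseudorehearsal: protect old knowledge without storing old items.
# Sizes, learning rate, and toy tasks are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(W1, W2, X, T, lr=0.1, epochs=3000):
    """Squared-error backprop; updates W1 and W2 in place."""
    for _ in range(epochs):
        h = sigmoid(X @ W1)
        y = sigmoid(h @ W2)
        d_out = (y - T) * y * (1 - y)
        d_hid = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_hid

def predict(W1, W2, X):
    return sigmoid(sigmoid(X @ W1) @ W2)

XA, TA = rng.uniform(0, 1, (10, 8)), rng.integers(0, 2, (10, 8)).astype(float)
XB, TB = rng.uniform(0, 1, (10, 8)), rng.integers(0, 2, (10, 8)).astype(float)

W1 = rng.normal(0, 0.5, (8, 16))
W2 = rng.normal(0, 0.5, (16, 8))
train(W1, W2, XA, TA)                      # learn task A first

# Pseudorehearsal: probe the trained network with random inputs and record
# its outputs; these pseudo-items stand in for the (discarded) task-A data.
Xp = rng.uniform(0, 1, (20, 8))
Tp = predict(W1, W2, Xp)

# Learn task B interleaved with the pseudo-items, which anchor the function
# the network computed before and so shield task A from being overwritten.
train(W1, W2, np.vstack([XB, Xp]), np.vstack([TB, Tp]))
print("task A error with pseudorehearsal:",
      round(float(np.mean((predict(W1, W2, XA) - TA) ** 2)), 4))
```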


Catastrophic Interference in Connectionist Networks: Can It Be Predicted, Can It Be Prevented?

Catastrophic forgetting occurs when connectionist networks learn new information and, by so doing, forget all previously learned information. This workshop focused primarily on the causes of catastrophic interference, the techniques that have been developed to reduce it, the effect of these techniques on the networks' ability to generalize, and the degree to which catastrophic forgetting can be predicted and prevented.


Using Semi-Distributed Representations to Overcome Catastrophic Forgetting in Connectionist Networks

In connectionist networks, newly learned information destroys previously learned information unless the network is continually retrained on the old information. This behavior, known as catastrophic forgetting, is unacceptable both for practical purposes and as a model of mind. This paper advances the claim that catastrophic forgetting is a direct consequence of the overlap of the system's distributed representations and can be reduced by reducing this overlap.
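To see why representational overlap matters, here is a hypothetical sketch (not the paper's code) comparing the expected overlap of fully distributed hidden codes with that of sparse, semi-distributed codes; interference tracks this overlap because patterns that activate the same hidden units compete for the same weights.

```python
# Hypothetical sketch: sparse, semi-distributed hidden codes overlap far
# less than fully distributed ones, which is why they interfere less.
import numpy as np

rng = np.random.default_rng(2)
n_hid = 50

def overlap(a, b):
    # Mean elementwise product: large when the same hidden units are
    # strongly active for both patterns.
    return float(np.mean(a * b))

# Fully distributed code: every unit somewhat active for every pattern.
dense_a = rng.uniform(0, 1, n_hid)
dense_b = rng.uniform(0, 1, n_hid)

# Semi-distributed code: only k of the n_hid units active per pattern.
def sparse_code(k=5):
    v = np.zeros(n_hid)
    v[rng.choice(n_hid, size=k, replace=False)] = 1.0
    return v

print("dense overlap :", overlap(dense_a, dense_b))              # ~0.25 expected
print("sparse overlap:", overlap(sparse_code(), sparse_code()))  # ~0.01 expected
```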


Synfire Chains and Catastrophic Interference

The brain must be capable of achieving extraordinarily precise sub-millisecond timing with imprecise neural hardware. We discuss how this might be possible using synfire chains (Abeles, 1991) and present a synfire chain learning algorithm for a sparsely-distributed network of spiking neurons (Sougné, 1999). Surprisingly, we show that this learning is not subject to catastrophic interference, a ...



Publication date: 1994